51 research outputs found

    Building and exploiting context on the web

    No abstract available.

    Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing

    In the 21st century, where automated systems and artificial intelligence are replacing arduous manual labor by supporting data-intensive tasks, many problems still require human intelligence. Over the last decade, by tapping into human intelligence through microtasks, crowdsourcing has found remarkable applications in a wide range of domains. In this article, the authors discuss the growth of crowdsourcing systems since the term was coined by columnist Jeff Howe in 2006. They shed light on the evolution of crowdsourced microtasks in recent times. Next, they discuss a major challenge that hinders the quality of crowdsourced results: the prevalence of malicious behavior. They reflect on crowdsourcing's advantages and disadvantages. Finally, they leave the reader with interesting avenues for future research.

    User Profile Based Activities in Flexible Processes

    The COOPER platform is a collaborative, open environment built on the idea of flexible, user-centric process support. It allows cooperating team members to define collaborative processes and to modify process activities flexibly, even during process execution. In this paper we describe how the incorporation of decentralized user data through mashups allows the COOPER platform to support the definition and execution of so-called user-profile-based activities, i.e., process activities that are adapted to the preferences of the process actors. We define two basic types of user-profile-based activities: user-adapted activities and user-conditional activities. The first are modeled according to the user profile data, while the second employ the same user data to enable automatic workflow decisions.

    Interlinking documents based on semantic graphs

    Connectivity and relatedness of Web resources are two concepts that define to what extent different resources are connected or related to one another. Measuring connectivity and relatedness between Web resources is a growing field of research, often the starting point of recommender systems. Although relatedness is open to subjective interpretation, connectivity is not. Given the Semantic Web's ability to link Web resources, connectivity can be measured by exploiting the links between entities. Further, these connections can be exploited to uncover relationships between Web resources. In this paper, we apply and expand a relationship assessment methodology from social network theory to measure the connectivity between documents. The connectivity measures are used to identify connected and related Web resources. Our approach is able to expose relations that traditional text-based approaches fail to identify. We validate and assess our proposed approaches through an evaluation on a real-world dataset, where results show that the proposed techniques outperform state-of-the-art approaches.
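The core idea of a graph-based connectivity measure can be sketched as follows. This is a minimal illustration, not the paper's actual measure: the document names, entity sets, and link graph are invented for the example, and the score is a simple fraction of cross-document entity pairs that are identical or directly linked in a reference graph.

```python
from itertools import product

# Hypothetical entity annotations per document (invented for illustration)
doc_entities = {
    "doc_a": {"Semantic_Web", "Linked_Data", "RDF"},
    "doc_b": {"Linked_Data", "SPARQL", "RDF"},
}

# Toy entity-link graph standing in for links in a reference dataset
entity_links = {
    ("Semantic_Web", "Linked_Data"),
    ("Linked_Data", "SPARQL"),
    ("RDF", "SPARQL"),
}

def linked(a, b):
    """True if the two entities coincide or share a direct link."""
    return a == b or (a, b) in entity_links or (b, a) in entity_links

def connectivity(d1, d2):
    """Fraction of cross-document entity pairs that are linked."""
    pairs = list(product(doc_entities[d1], doc_entities[d2]))
    hits = sum(1 for a, b in pairs if linked(a, b))
    return hits / len(pairs)

print(round(connectivity("doc_a", "doc_b"), 2))  # 5 of 9 pairs linked -> 0.56
```

A real system would replace the toy graph with links drawn from a reference dataset and would weight paths rather than counting only direct edges.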

    Exploiting the wisdom of the crowds for characterizing and connecting heterogeneous resources

    Heterogeneous content is an inherent problem for cross-system search, recommendation and personalization. In this paper we investigate differences in topic coverage and the impact of topics in different kinds of Web services. We use entity extraction and categorization to create 'fingerprints' that allow for meaningful comparison. As a basis taxonomy, we use the 23 main categories of the Wikipedia Category Graph, which has been assembled over the years by the wisdom of the crowds. Following a proof of concept of our approach, we analyze differences in topic coverage and topic impact. The results show many differences between Web services such as Twitter, Flickr and Delicious, which reflect users' behavior and the usage of each system. The paper concludes with a user study that demonstrates the benefits of fingerprints over traditional textual methods for recommendations of heterogeneous resources.
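A category fingerprint of the kind described can be sketched as a normalised distribution over top-level categories, compared with cosine similarity. Everything here is an assumption for illustration: the entity-to-category mapping is invented (the paper uses the 23 main Wikipedia categories, of which only three appear below), and the two "services" are stubbed entity lists.

```python
import math
from collections import Counter

# Hypothetical mapping from extracted entities to top-level categories
ENTITY_CATEGORY = {
    "Eiffel_Tower": "Geography",
    "Photosynthesis": "Science",
    "Impressionism": "Arts",
    "Mona_Lisa": "Arts",
}

def fingerprint(entities):
    """Normalised distribution over top-level categories."""
    counts = Counter(ENTITY_CATEGORY[e] for e in entities if e in ENTITY_CATEGORY)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def cosine(fp1, fp2):
    """Cosine similarity between two category distributions."""
    cats = set(fp1) | set(fp2)
    dot = sum(fp1.get(c, 0.0) * fp2.get(c, 0.0) for c in cats)
    n1 = math.sqrt(sum(v * v for v in fp1.values()))
    n2 = math.sqrt(sum(v * v for v in fp2.values()))
    return dot / (n1 * n2)

service_a = fingerprint(["Eiffel_Tower", "Mona_Lisa", "Impressionism"])
service_b = fingerprint(["Photosynthesis", "Mona_Lisa"])
print(round(cosine(service_a, service_b), 2))  # -> 0.63
```

Comparing such fingerprints sidesteps vocabulary mismatch between services, since two resources need not share any words to share categories.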

    Annotation tool for enhancing e-learning courses

    One of the most popular forms of learning is through reading, and for years hard-copy documents have been the main material to learn from. With the advent of the Internet and the fast development of new technologies, new tools have been developed to assist the learning process. However, reading remains the main learning method, and it is an individual activity. In this paper we propose a highlighting tool that turns reading and learning into a collaborative, shared activity. In other words, the highlighting tool supports so-called active reading, a well-known and efficient means of learning. The highlighting tool brings the metaphor of the traditional highlight marker to the digital environment and puts it in a social context. It enables users to emphasize certain portions of digital learning objects. Furthermore, it provides students, tutors, course coordinators and educational institutions with new possibilities in the teaching and learning process. In this work we present the first quantitative and qualitative results regarding the use of the highlighting tool by over 750 students across 8 weeks of courses. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-33642-3_6.

    Combining a co-occurrence-based and a semantic measure for entity linking

    One key feature of the Semantic Web lies in the ability to link related Web resources. However, while relations within particular datasets are often well-defined, links between disparate datasets and corpora of Web resources are rare. The increasingly widespread use of cross-domain reference datasets, such as Freebase and DBpedia, for annotating and enriching datasets as well as documents opens up opportunities to exploit their inherent semantic relationships to align disparate Web resources. In this paper, we present a combined approach to uncover relationships between disparate entities that exploits (a) graph analysis of reference datasets together with (b) entity co-occurrence on the Web with the help of search engines. In (a), we introduce a novel approach adapted from social network theory to measure the connectivity between given entities in reference datasets. The connectivity measures are used to identify connected Web resources. Finally, we present a thorough evaluation of our approach using a publicly available dataset and introduce a comparison with established measures in the field. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38288-8_37
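The combination of the two evidence sources can be sketched as a weighted sum of a graph-based score and a co-occurrence score. This is only an illustration of the combination step, not the paper's actual formulas: the entity pair, the stubbed scores, the overlap-coefficient co-occurrence measure, and the weight alpha are all assumptions.

```python
# Stub for the graph-based connectivity score from a reference dataset
# (hypothetical values; a real system would compute these from DBpedia etc.)
GRAPH_SCORES = {("Barack_Obama", "White_House"): 0.8}

def semantic_score(e1, e2):
    """Graph-based connectivity, looked up symmetrically."""
    return GRAPH_SCORES.get((e1, e2), GRAPH_SCORES.get((e2, e1), 0.0))

def cooccurrence_score(e1, e2, hits_both=120, hits_e1=900, hits_e2=400):
    """Overlap coefficient over (stubbed) search-engine hit counts."""
    return hits_both / min(hits_e1, hits_e2)

def combined(e1, e2, alpha=0.5):
    """Linear combination of the two evidence sources."""
    return alpha * semantic_score(e1, e2) + (1 - alpha) * cooccurrence_score(e1, e2)

print(round(combined("Barack_Obama", "White_House"), 2))  # 0.5*0.8 + 0.5*0.3 -> 0.55
```

The appeal of such a combination is that the two sources fail differently: reference graphs miss entities that are poorly covered, while co-occurrence counts pick up any pair that is frequently mentioned together.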

    ID5.15: New Core specifications v2

    Mazzetti, A., Dicerto, M., Grigorov, A., Zerr, S., De Coi, J. L., Kawase, R., Perez, M., Roldan, A., Mittal, P. (2009). ID5.15: New Core specifications v2. This document describes the new functionalities for LearnWeb/KRService, which will be implemented in the new version, V0.3, available in late spring 2009. The main topics cover: resource functionalities, integration with other TENCompetence tools, and social functionalities. The document describes LearnWeb/KRService in terms of architecture, user interface, resource functionalities, integration functionalities, and social functionalities. The work on this publication has been sponsored by the TENCompetence Integrated Project, funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning, Contract 027087 [http://www.tencompetence.org]

    Answering Confucius: The reason why we complicate

    Learning progresses in levels: in any field of study, one must master basic concepts to understand more complex ones. Thus, it is important that during the learning process learners are presented and challenged with knowledge that they are able to comprehend (neither a level too low nor a level too high). In this work we focus on language learners. By gradually complicating texts, readers are challenged to learn new vocabulary. To this end, we propose and evaluate the 'complicator', which translates given sentences to a chosen higher level of difficulty. The 'complicator' is based on natural language processing and information retrieval approaches that perform lexical replacements. Thirty native English speakers participated in a user study evaluating our methods on an expert-tailored dataset of children's books. Results show that our tool can be of great utility for language learners who wish to improve their vocabulary. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40814-4_45.
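The lexical-replacement step at the heart of such a complicator can be sketched with a graded synonym table. The table below is invented for illustration; the paper derives its replacements with NLP and IR methods rather than from a hand-written dictionary, and the two-level grading is an assumption.

```python
# Hypothetical graded synonym table: simple word -> harder synonym per level
SYNONYMS = {
    "big":   {2: "large", 3: "enormous"},
    "happy": {2: "cheerful", 3: "exuberant"},
}

def complicate(sentence, level):
    """Replace each known word with its synonym at the requested level,
    keeping the original word when no graded synonym exists."""
    words = sentence.split()
    return " ".join(SYNONYMS.get(w, {}).get(level, w) for w in words)

print(complicate("the big dog looks happy", 3))
# -> "the enormous dog looks exuberant"
```

A production system would also need to respect inflection and word sense, which is where the NLP components come in; the sketch only shows the substitution skeleton.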

    Special issue on: Learning Analytics (Editorial)

    With the general technological advances of recent years, current learning environments amass an abundance of data. Although such data offer the chance to better understand the learning process, stakeholders – learners, teachers and institutions – often need additional support to make sense of them (Dyckhoff et al., 2013; Macfadyen and Dawson, 2012). The acknowledgement of these needs is at the heart of the recent emergence of Learning Analytics (LA), a research area that draws from multiple disciplines such as educational science, information and computer science, sociology, psychology, statistics and educational data mining (Buckingham Shum and Ferguson, 2012). This multidisciplinarity in LA motivated the work of Ferguson (2012), which provides a first review of the drivers, development and challenges of this young research area. Our understanding of learning analytics is based on the definition from SoLAR, the Society for Learning Analytics Research, which specifies that “Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs”. Since 2011, the Horizon reports have listed Learning Analytics as a hot topic in higher education and have indicated the importance of data for this field (Johnson et al., 2011). Learning analytics can provide a fresh view on teaching and learning by observing patterns in complex data (Johnson et al., 2012), and it will strongly influence the evolution of higher education. Nowadays, learners have access to a huge amount of online information and can themselves be content creators and information sharers. The quantity of available information therefore grows exponentially, since every citizen can both access and produce information.
For these purposes, learners have at their disposal many online resources, including LMSs, VLEs, MOOCs and many other online tools that facilitate the learning process and the development of competences. Given these online learning facilities and the traces learners leave while acquiring knowledge, it also becomes easier to measure and analyse their experiences using learning analytics tools. Different online courses and institutions provide dashboards with information about student experiences, failures and successes. Although the investigation of behaviour-specific data makes learning analytics complex, the time has come to employ personalised learning environments adapted to students' learning paths, skills, previous knowledge, competences and motivation.